
    Quantum Generative Adversarial Networks for Learning and Loading Random Distributions

    Quantum algorithms have the potential to outperform their classical counterparts in a variety of tasks. The realization of the advantage often requires the ability to load classical data efficiently into quantum states. However, the best known methods require O(2^n) gates to load an exact representation of a generic data structure into an n-qubit state. This scaling can easily predominate the complexity of a quantum algorithm and, thereby, impair potential quantum advantage. Our work presents a hybrid quantum-classical algorithm for efficient, approximate quantum state loading. More precisely, we use quantum Generative Adversarial Networks (qGANs) to facilitate efficient learning and loading of generic probability distributions -- implicitly given by data samples -- into quantum states. Through the interplay of a quantum channel, such as a variational quantum circuit, and a classical neural network, the qGAN can learn a representation of the probability distribution underlying the data samples and load it into a quantum state. The loading requires O(poly(n)) gates and can, thus, enable the use of potentially advantageous quantum algorithms, such as Quantum Amplitude Estimation. We implement the qGAN distribution learning and loading method with Qiskit and test it using a quantum simulation as well as actual quantum processors provided by the IBM Q Experience. Furthermore, we employ quantum simulation to demonstrate the use of the trained quantum channel in a quantum finance application. Comment: 14 pages, 13 figures
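    A minimal sketch of the loading idea (my illustration: the RealAmplitudes ansatz and the discretised log-normal-like target are assumptions, not taken from the paper): a shallow n-qubit variational circuit with polynomially many gates encodes a probability distribution in its measurement statistics. In the qGAN, the circuit parameters would be trained adversarially against a classical discriminator rather than evaluated at a random point as below.

        import numpy as np
        from qiskit.circuit.library import RealAmplitudes
        from qiskit.quantum_info import Statevector

        n_qubits = 3                                 # 2^3 = 8 grid points
        ansatz = RealAmplitudes(n_qubits, reps=2)    # shallow generator, O(poly(n)) gates

        # Illustrative target: a discretised log-normal-like distribution on {0, ..., 7}.
        x = np.arange(2**n_qubits)
        target = np.exp(-0.5 * (np.log(x + 1) - 1.0)**2)
        target /= target.sum()

        def generator_probs(params):
            """Distribution encoded in the generator state's measurement probabilities."""
            return Statevector(ansatz.assign_parameters(params)).probabilities()

        # In the qGAN these parameters are optimised adversarially against a classical
        # discriminator; here we only evaluate a random initial point.
        params = np.random.default_rng(0).uniform(0, 2 * np.pi, ansatz.num_parameters)
        print("generator:", np.round(generator_probs(params), 3))
        print("target:   ", np.round(target, 3))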

    Quantum-Enhanced Simulation-Based Optimization

    In this paper, we introduce a quantum-enhanced algorithm for simulation-based optimization. Simulation-based optimization seeks to optimize an objective function that is computationally expensive to evaluate exactly, and thus, is approximated via simulation. Quantum Amplitude Estimation (QAE) can achieve a quadratic speed-up over classical Monte Carlo simulation. Hence, in many cases, it can achieve a speed-up for simulation-based optimization as well. Combining QAE with ideas from quantum optimization, we show how this can be used not only for continuous but also for discrete optimization problems. Furthermore, the algorithm is demonstrated on illustrative problems such as portfolio optimization with a Value at Risk constraint and inventory management. Comment: 9 pages, 9 figures
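    A back-of-the-envelope illustration of the quadratic speed-up claim (constants are set to 1 here, which real implementations will not achieve exactly): classical Monte Carlo needs on the order of 1/eps^2 samples to reach estimation error eps, while amplitude estimation needs on the order of 1/eps oracle queries.

        # Compare the standard scalings: eps ~ 1/sqrt(N) for Monte Carlo samples
        # versus eps ~ 1/M for QAE oracle queries (illustrative, constants = 1).
        for eps in (1e-2, 1e-3, 1e-4):
            n_mc = int(1 / eps**2)    # classical samples, O(1/eps^2)
            n_qae = int(1 / eps)      # QAE oracle queries, O(1/eps)
            print(f"eps={eps:.0e}:  Monte Carlo ~ {n_mc:>10,d} samples,  QAE ~ {n_qae:>7,d} queries")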

    From Tight Gradient Bounds for Parameterized Quantum Circuits to the Absence of Barren Plateaus in QGANs

    Barren plateaus are a central bottleneck in the scalability of variational quantum algorithms (VQAs), and are known to arise in various ways, from circuit depth and hardware noise to global observables. However, a caveat of most existing results is the requirement of t-design circuit assumptions that are typically not satisfied in practice. In this work, we loosen these assumptions altogether and derive tight upper and lower bounds on gradient concentration, for a large class of parameterized quantum circuits and arbitrary observables. By requiring only a couple of design choices that are constructive and easily verified, our results can readily be leveraged to rule out barren plateaus for explicit circuits and mixed observables, namely, observables containing a non-vanishing local term. This insight has direct implications for hybrid Quantum Generative Adversarial Networks (qGANs), a generative model that can be reformulated as a VQA with an observable composed of local and global terms. We prove that designing the discriminator appropriately leads to 1-local weights that stay constant in the number of qubits, regardless of discriminator depth. Combined with our first contribution, this implies that qGANs with shallow generators can be trained at scale without suffering from barren plateaus -- making them a promising candidate for applications in generative quantum machine learning. We demonstrate this result by training a qGAN to learn a 2D mixture of Gaussian distributions with up to 16 qubits, and provide numerical evidence that global contributions to the gradient, while initially exponentially small, may kick in substantially over the course of training.
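    A small numerical check in the spirit of this result, not the paper's construction (the shallow RealAmplitudes ansatz and the choice of observables are my assumptions): the variance of a single parameter-shift gradient component over random initialisations can be compared for a 1-local observable versus a global one as the qubit count grows.

        import numpy as np
        from qiskit.circuit.library import RealAmplitudes
        from qiskit.quantum_info import SparsePauliOp, Statevector

        def grad_variance(n_qubits, observable, n_samples=50, seed=1):
            """Variance of d<observable>/d(theta_0) over random initialisations."""
            ansatz = RealAmplitudes(n_qubits, reps=2)
            rng = np.random.default_rng(seed)
            grads = []
            for _ in range(n_samples):
                theta = rng.uniform(0, 2 * np.pi, ansatz.num_parameters)
                values = []
                for shift in (+np.pi / 2, -np.pi / 2):   # parameter-shift rule
                    shifted = theta.copy()
                    shifted[0] += shift
                    state = Statevector(ansatz.assign_parameters(shifted))
                    values.append(state.expectation_value(observable).real)
                grads.append(0.5 * (values[0] - values[1]))
            return np.var(grads)

        for n in (2, 4, 6):
            local_obs = SparsePauliOp("Z" + "I" * (n - 1))   # single-qubit Z (1-local term)
            global_obs = SparsePauliOp("Z" * n)              # global Z...Z term
            print(n, grad_variance(n, local_obs), grad_variance(n, global_obs))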

    Error Bounds for Variational Quantum Time Evolution

    Variational quantum time evolution allows us to simulate the time dynamics of quantum systems with near-term compatible quantum circuits. Due to the variational nature of this method, the accuracy of the simulation is a priori unknown. We derive global-phase-agnostic error bounds for the state simulation accuracy with variational quantum time evolution that improve the tightness of fidelity estimates over existing error bounds. These analysis tools are practically crucial for assessing the quality of the simulation and for making informed choices about simulation hyper-parameters. The efficient, a posteriori evaluation of the bounds can be tightly integrated with the variational time simulation and hence results in a minor resource overhead, which is governed by the system's energy variance. The performance of the novel error bounds is demonstrated on numerical examples.
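    The "global-phase-agnostic" aspect can be illustrated with a tiny example of my own (not the paper's derivation): two statevectors that differ only by a global phase describe the same physical state, so a useful error measure should vanish for them even though the naive vector distance does not.

        import numpy as np

        psi = np.array([1, 1j]) / np.sqrt(2)
        phi = np.exp(1j * 0.7) * psi                 # same state, different global phase

        naive = np.linalg.norm(psi - phi)            # sensitive to the unphysical phase
        fidelity = abs(np.vdot(psi, phi))**2
        # Minimum of ||exp(i*a)*psi - phi|| over the global phase a.
        phase_agnostic = np.sqrt(2 - 2 * np.sqrt(fidelity))

        print(f"naive distance      : {naive:.3f}")
        print(f"fidelity            : {fidelity:.3f}")
        print(f"phase-agnostic error: {phase_agnostic:.3f}")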

    Option Pricing using Quantum Computers

    We present a methodology to price options and portfolios of options on a gate-based quantum computer using amplitude estimation, an algorithm which provides a quadratic speedup compared to classical Monte Carlo methods. The options that we cover include vanilla options, multi-asset options and path-dependent options such as barrier options. We put an emphasis on the implementation of the quantum circuits required to build the input states and operators needed by amplitude estimation to price the different option types. Additionally, we show simulation results to highlight how the circuits that we implement price the different option contracts. Finally, we examine the performance of option pricing circuits on quantum hardware using the IBM Q Tokyo quantum device. We employ a simple, yet effective, error mitigation scheme that allows us to significantly reduce the errors arising from noisy two-qubit gates. Comment: Fixed a typo. This article has been accepted in Quantum.
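    For orientation, here is the classical Monte Carlo baseline for a vanilla European call under log-normal dynamics (the parameters are made up, and the paper's quantum circuits are not reproduced): it is this sampling step whose error scaling amplitude estimation improves, roughly from O(1/sqrt(N)) to O(1/N).

        import numpy as np

        S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0   # spot, strike, rate, vol, maturity
        rng = np.random.default_rng(0)
        N = 100_000

        z = rng.standard_normal(N)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)   # terminal prices
        payoff = np.maximum(ST - K, 0.0)                                      # call payoff
        price = np.exp(-r * T) * payoff.mean()
        stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(N)

        print(f"MC price ~ {price:.3f} +/- {stderr:.3f}")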

    Simultaneous Perturbation Stochastic Approximation of the Quantum Fisher Information

    The Quantum Fisher Information matrix (QFIM) is a central metric in promising algorithms, such as Quantum Natural Gradient Descent and Variational Quantum Imaginary Time Evolution. Computing the full QFIM for a model with d parameters, however, is computationally expensive and generally requires O(d^2) function evaluations. To remedy these increasing costs in high-dimensional parameter spaces, we propose using simultaneous perturbation stochastic approximation techniques to approximate the QFIM at a constant cost. We present the resulting algorithm and successfully apply it to prepare Hamiltonian ground states and train Variational Quantum Boltzmann Machines.
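    A hedged sketch of the idea as I read it (constants and conventions may differ from the paper, and the ansatz is an arbitrary choice): each stochastic sample of the Fubini-Study metric, which equals the QFIM up to a factor of 4, needs only four fidelity evaluations along two random perturbation directions, independent of the number of parameters d.

        import numpy as np
        from qiskit.circuit.library import RealAmplitudes
        from qiskit.quantum_info import Statevector

        ansatz = RealAmplitudes(3, reps=2)
        d = ansatz.num_parameters

        def fidelity(theta_a, theta_b):
            a = Statevector(ansatz.assign_parameters(theta_a)).data
            b = Statevector(ansatz.assign_parameters(theta_b)).data
            return abs(np.vdot(a, b))**2

        def metric_sample(theta, eps=0.01, seed=None):
            """One stochastic metric sample from only four fidelity evaluations."""
            rng = np.random.default_rng(seed)
            d1 = rng.choice([-1.0, 1.0], size=d)   # random +/-1 perturbation directions
            d2 = rng.choice([-1.0, 1.0], size=d)
            delta_f = (fidelity(theta, theta + eps * (d1 + d2))
                       - fidelity(theta, theta + eps * d1)
                       - fidelity(theta, theta + eps * (d2 - d1))
                       + fidelity(theta, theta - eps * d1))
            # Symmetrised rank-2 sample of -1/2 x Hessian of the fidelity.
            return -0.5 * delta_f / (2 * eps**2) * (np.outer(d1, d2) + np.outer(d2, d1)) / 2

        theta = np.random.default_rng(0).uniform(0, 2 * np.pi, d)
        estimate = np.mean([metric_sample(theta, seed=s) for s in range(200)], axis=0)
        print(np.round(estimate, 3))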

    Variational quantum algorithm for unconstrained black box binary optimization: Application to feature selection

    We introduce a variational quantum algorithm to solve unconstrained black box binary optimization problems, i.e., problems in which the objective function is given as a black box. This is in contrast to the typical setting of quantum algorithms for optimization, where a classical objective function is provided as a given Quadratic Unconstrained Binary Optimization problem and mapped to a sum of Pauli operators. Furthermore, we provide theoretical justification for our method based on convergence guarantees of quantum imaginary time evolution. To investigate the performance of our algorithm and its potential advantages, we tackle a challenging real-world optimization problem: feature selection. This refers to the problem of selecting a subset of relevant features to use for constructing a predictive model, such as one for fraud detection. Optimal feature selection, when formulated in terms of a generic loss function, offers little structure on which to build classical heuristics, thus resulting primarily in ‘greedy methods’. This leaves room for (near-term) quantum algorithms to be competitive with classical state-of-the-art approaches. We apply our quantum-optimization-based feature selection algorithm, termed VarQFS, to build a predictive model for a credit risk data set with 20 and 59 input features (qubits) and train the model using quantum hardware and tensor-network-based numerical simulations, respectively. We show that the quantum method produces competitive, and in certain aspects even better, performance compared to traditional feature selection techniques used in today's industry.
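    The structure of the search problem can be sketched as follows (a deliberate simplification: the toy loss and the sampling-based objective below are stand-ins, not the paper's quantum-imaginary-time-evolution-based training): a variational circuit with one qubit per candidate feature defines a distribution over feature-subset bitstrings, and only black-box loss evaluations of sampled subsets are needed to score the circuit parameters.

        import numpy as np
        from qiskit.circuit.library import RealAmplitudes
        from qiskit.quantum_info import Statevector

        n_features = 4
        ansatz = RealAmplitudes(n_features, reps=1)   # one qubit per candidate feature

        def black_box_loss(bits):
            """Toy stand-in for a model-training score: prefer features 0 and 2."""
            target = np.array([1, 0, 1, 0])
            return float(np.abs(bits - target).sum() + 0.1 * bits.sum())

        def objective(params, shots=256, seed=0):
            """Average black-box loss over feature subsets sampled from the circuit."""
            rng = np.random.default_rng(seed)
            probs = Statevector(ansatz.assign_parameters(params)).probabilities()
            samples = rng.choice(2**n_features, size=shots, p=probs)
            bits = (samples[:, None] >> np.arange(n_features)) & 1   # little-endian bits
            return np.mean([black_box_loss(b) for b in bits])

        params = np.random.default_rng(1).uniform(0, 2 * np.pi, ansatz.num_parameters)
        print("sampled objective:", objective(params))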